Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Tce3stUlHN0L" }, "source": [ "##### Copyright 2024 Google LLC." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tuOe1ymfHZPu" }, "outputs": [], "source": [ "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "FyE6mpefBU_G" }, "source": [ "## Guide to using Gemma 2 with Langfun, PyGlove and llama-cpp-python\n", "\n", "[Langfun](https://github.com/google/langfun) is a versatile library designed to bridge the gap between natural language processing (NLP) and structured data manipulation. It allows you to effortlessly parse, interpret, and convert unstructured text into well-defined Python objects. This capability is invaluable for applications that require seamless integration between language models and data-driven systems.\n", "\n", "[PyGlove](https://github.com/google/pyglove) is a powerful library for flexible and scalable configuration management and automated machine learning (AutoML). It provides tools for defining complex schemas and object representations, enabling you to create highly customizable and maintainable code structures. PyGlove's compatibility with Langfun ensures that the objects you generate from natural language prompts are both robust and easy to use.\n", "\n", "[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) provides Python bindings for the [llama.cpp](https://github.com/ggerganov/llama.cpp) C++ library. This allows you to enjoy the performance optimizations of `llama.cpp` while benefiting from the simplicity and flexibility of Python. With llama-cpp-python, you get a convenient API for loading models, generating text, and customizing inference parameters.\n", "\n", "[Gemma](https://ai.google.dev/gemma) is a family of lightweight, state-of-the-art open-source language models from Google. Built from the same research and technology used to create the Gemini models, Gemma models are text-to-text, decoder-only large language models (LLMs), available in English, with open weights, pre-trained variants, and instruction-tuned variants.\n", "Gemma models are well-suited for various text-generation tasks, including question-answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.\n", "\n", "By combining `Langfun`, `PyGlove`, `llama.cpp` and `Gemma`, you can create applications that not only understand and process natural language but also convert that understanding into structured, actionable data. 
This integration allows you to build sophisticated systems that can interact with external APIs, databases, and other services using well-defined objects, streamlining workflows and enhancing functionality.\n", "\n", "In this guide, you'll learn how to set up your environment, configure the Gemma 2 model, and implement various real-world use cases.\n", "\n", "<table align=\"left\">\n", " <td>\n", " <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n", " </td>\n", "</table>\n" ] }, { "cell_type": "markdown", "metadata": { "id": "FaqZItBdeokU" }, "source": [ "## Setup\n", "\n", "### Select the Colab runtime\n", "To complete this tutorial, you'll need a Colab runtime with sufficient resources to run the Gemma model. In this case, you can use a T4 GPU:\n", "\n", "1. In the upper-right of the Colab window, select **▾ (Additional connection options)**.\n", "2. Select **Change runtime type**.\n", "3. Under **Hardware accelerator**, select **T4 GPU**.\n", "\n", "### Gemma setup\n", "\n", "**Before you dive into the tutorial, let's get you set up with Gemma:**\n", "\n", "1. **Hugging Face Account:** If you don't already have one, you can create a free Hugging Face account by clicking [here](https://huggingface.co/join).\n", "2. **Gemma Model Access:** Head over to the [Gemma model page](https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315) and accept the usage conditions.\n", "3. **Colab with Gemma Power:** For this tutorial, you'll need a Colab runtime with enough resources to handle the Gemma 9B model. Choose an appropriate runtime when starting your Colab session.\n", "4. **Hugging Face Token:** Generate a Hugging Face access token (`read` permission is sufficient for downloading the model) by clicking [here](https://huggingface.co/settings/tokens). You'll need this token later in the tutorial.\n", "\n", "**Once you've completed these steps, you're ready to move on to the next section, where you'll set up environment variables in your Colab environment.**\n" ] }, { "cell_type": "markdown", "metadata": { "id": "CY2kGtsyYpHF" }, "source": [ "### Configure your HF token\n", "\n", "Add your Hugging Face token to the Colab Secrets manager to securely store it.\n", "\n", "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n", "2. Create a new secret with the name `HF_TOKEN`.\n", "3. Copy/paste your token key into the Value input box of `HF_TOKEN`.\n", "4. Toggle the button on the left to allow notebook access to the secret.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hFhJSxWB5KXd" }, "outputs": [], "source": [ "import os\n", "from google.colab import userdata\n", "\n", "# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n", "# vars as appropriate for your system.\n", "os.environ[\"HF_TOKEN\"] = userdata.get(\"HF_TOKEN\")" ] }, { "cell_type": "markdown", "metadata": { "id": "iwjo5_Uucxkw" }, "source": [ "### Install dependencies\n", "\n", "You'll need the Langfun and PyGlove packages, the `huggingface_hub` library for downloading model files from Hugging Face, and `llama-cpp-python` to serve the model. 
Find some of the releases supporting CUDA 12.2 [here](https://abetlen.github.io/llama-cpp-python/whl/cu122/llama-cpp-python/).\n", "\n", "Run the following cell to install or upgrade it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rIJ1hKwf5YXq" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m297.5/297.5 kB\u001b[0m \u001b[31m3.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m70.1/70.1 kB\u001b[0m \u001b[31m3.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m585.4/585.4 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m244.3/244.3 kB\u001b[0m \u001b[31m13.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h Building wheel for termcolor (setup.py) ... \u001b[?25l\u001b[?25hdone\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m436.4/436.4 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m443.8/443.8 MB\u001b[0m \u001b[31m3.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.5/45.5 kB\u001b[0m \u001b[31m4.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m94.6/94.6 kB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m63.7/63.7 kB\u001b[0m \u001b[31m5.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.5/71.5 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h" ] } ], "source": [ "# The Langfun and PyGlove libraries\n", "!pip install -q langfun pyglove\n", "\n", "# The huggingface_hub library allows us to download models and other files from Hugging Face.\n", "!pip install --upgrade -q huggingface_hub\n", "\n", "# The llama-cpp-python[server] library allows us to leverage GPUs\n", "!pip install llama-cpp-python[server]==0.2.90 \\\n", " -q -U --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122" ] }, { "cell_type": "markdown", "metadata": { "id": "2_bahJBmwvSp" }, "source": [ "### Logging into Hugging Face Hub\n", "\n", "Next, you'll have to log into the Hugging Face Hub using your access token. This will allow us to download the Gemma model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pd08YwW75gvm" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The token has not been saved to the git credentials helper. 
Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.\n", "Token is valid (permission: read).\n", "Your token has been saved to /root/.cache/huggingface/token\n", "Login successful\n" ] } ], "source": [ "from huggingface_hub import login\n", "\n", "login(os.environ[\"HF_TOKEN\"])" ] }, { "cell_type": "markdown", "metadata": { "id": "_-NAW6VXOeBt" }, "source": [ "### Downloading the Gemma 2 Model\n", "Once you're logged in, you can download the Gemma 2 model files from Hugging Face. The [Gemma 2 model](https://huggingface.co/bartowski/gemma-2-9b-it-GGUF) is available in **GGUF** format, which is optimized for use with `llama.cpp` and compatible tools like Llamafile." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MuWXSuqFPSQW" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "eeca394ee1434512a5864b5ed211a441", "version_major": 2, "version_minor": 0 }, "text/plain": [ "gemma-2-9b-it-Q6_K.gguf: 0%| | 0.00/7.59G [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.google.colaboratory.intrinsic+json": { "type": "string" }, "text/plain": [ "'gemma-2-9b-it-Q6_K.gguf'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from huggingface_hub import hf_hub_download\n", "\n", "# Specify the repository and filename\n", "repo_id = 'bartowski/gemma-2-9b-it-GGUF' # Repository containing the GGUF model\n", "filename = 'gemma-2-9b-it-Q6_K.gguf' # The GGUF model file\n", "\n", "# Download the model file to the current directory\n", "hf_hub_download(repo_id=repo_id, filename=filename, local_dir='.')" ] }, { "cell_type": "markdown", "metadata": { "id": "lr3K9wqMH0K-" }, "source": [ "### Running the Gemma 2 model using `llama-cpp-python`\n", "\n", "Next, you'll serve the downloaded GGUF model with `llama-cpp-python`'s OpenAI-compatible server (`llama_cpp.server`), offloading all layers to the GPU via `--n_gpu_layers -1`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VywSmn34_lhL" }, "outputs": [], "source": [ "!python -m llama_cpp.server \\\n", " --model gemma-2-9b-it-Q6_K.gguf \\\n", " --n_gpu_layers -1 \\\n", " --host 0.0.0.0 \\\n", " --port 8000 \\\n", " --chat_format gemma > server.log 2>&1 &\n", "!sleep 60" ] }, { "cell_type": "markdown", "metadata": { "id": "ZgU1KxkyYFDL" }, "source": [ "The `sleep 60` gives the server time to load the model before you send requests. Hint: You can use `tail -f server.log` to preview the command's output if necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v3hoGJF6Wi8_" }, "outputs": [], "source": [ "from google.colab.output import eval_js\n", "\n", "# (Optional) View the API docs for the llama.cpp python server here\n", "# Uncomment the following line if you need to access the API docs\n", "# print(eval_js(\"google.colab.kernel.proxyPort(8000)\") + \"docs\")" ] }, { "cell_type": "markdown", "metadata": { "id": "import-libraries" }, "source": [ "### Import Libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "import-packages" }, "outputs": [], "source": [ "import langfun as lf\n", "import pyglove as pg" ] }, { "cell_type": "markdown", "metadata": { "id": "configure-model" }, "source": [ "### Configuring the Gemma 2 Model\n", "\n", "To use the Gemma 2 model with Langfun, you'll create a custom model class, since Langfun's built-in `llama.cpp` support doesn't yet cover the `llama-cpp-python` server."
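] }, { "cell_type": "markdown", "metadata": { "id": "verify-server-note" }, "source": [ "Before wiring the server into Langfun, you can optionally confirm that it's responding. The next cell is a minimal sketch that sends a raw completion request to the server's OpenAI-compatible `/v1/completions` endpoint using the `requests` library; the prompt text and `max_tokens` value are illustrative." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "verify-server" }, "outputs": [], "source": [ "import requests\n", "\n", "# Minimal sanity check: send a tiny completion request to the local server.\n", "response = requests.post(\n", " 'http://0.0.0.0:8000/v1/completions',\n", " json={'prompt': 'Hello', 'max_tokens': 8},\n", " timeout=120,\n", ")\n", "response.raise_for_status()\n", "print(response.json()['choices'][0]['text'])"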
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "E3BmMNQ1L_-k" }, "outputs": [], "source": [ "from typing import Any\n", "from langfun.core.llms import rest\n", "\n", "# Adapted from Langfun's llama.cpp module:\n", "# https://github.com/google/langfun/blob/main/langfun/core/llms/llama_cpp.py\n", "# repurposed here to target the llama-cpp-python server's completion endpoint.\n", "\n", "\n", "class LlamaCppPythonRemote(rest.REST):\n", " \"\"\"The remote llama.cpp Python model.\"\"\"\n", "\n", " @pg.explicit_method_override\n", " def __init__(self, url: str, model: str | None = None, **kwargs):\n", " # Use llama-cpp-python's completion URL scheme\n", " super().__init__(api_endpoint=f'{url}/completions', model=model, **kwargs)\n", "\n", " @property\n", " def model_id(self) -> str:\n", " \"\"\"Returns a string to identify the model.\"\"\"\n", " return f'LLaMAC++({self.model or \"\"})'\n", "\n", " def request(\n", " self, prompt: lf.core.Message, sampling_options: lf.core.LMSamplingOptions\n", " ) -> dict[str, Any]:\n", " \"\"\"Returns the JSON input for a message.\"\"\"\n", " request = dict()\n", " request.update(self._request_args(sampling_options))\n", " request['prompt'] = prompt.text\n", " return request\n", "\n", " def _request_args(self, options: lf.core.LMSamplingOptions) -> dict[str, Any]:\n", " \"\"\"Returns a dict as request arguments.\"\"\"\n", " args = dict(\n", " max_tokens=options.max_tokens or 1024,\n", " top_k=options.top_k or 50,\n", " top_p=options.top_p or 0.95,\n", " )\n", " if options.temperature is not None:\n", " args['temperature'] = options.temperature\n", " return args\n", "\n", " def result(self, json: dict[str, Any]) -> lf.core.LMSamplingResult:\n", " # Convert the llama-cpp-python response into Langfun samples.\n", " return lf.core.LMSamplingResult(\n", " [lf.core.LMSample(item['choices'][0]['text'], score=0.0) for item in json['items']]\n", " )\n", "\n", " def _sample_single(self, prompt: lf.Message) -> lf.core.LMSamplingResult:\n", " request = self.request(prompt, self.sampling_options)\n", "\n", " def _sample_one_example(request):\n", " response = self._session.post(\n", " self.api_endpoint,\n", " json=request,\n", " timeout=self.timeout,\n", " )\n", " if response.status_code == 200:\n", " return response.json()\n", " else:\n", " error_cls = self._error_cls_from_status(response.status_code)\n", " raise error_cls(f'{response.status_code}: {response.content}')\n", "\n", " # Issue one request per requested sample, under Langfun's concurrency control.\n", " items = self._parallel_execute_with_currency_control(\n", " _sample_one_example,\n", " [request] * (self.sampling_options.n or 1),\n", " )\n", " return self.result(dict(items=items))" ] }, { "cell_type": "markdown", "metadata": { "id": "format-prompt" }, "source": [ "To programmatically prepare a prompt for an instruction-tuned model, you can create a helper function that wraps the user input with Gemma's control tokens. 
Here's an example Python function that can be reused to apply this formatting:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "define-prompt-function" }, "outputs": [], "source": [ "def format_gemma_prompt(user_input):\n", " \"\"\"\n", " Formats a prompt for the Gemma instruction-tuned model by adding control tokens.\n", " Here you'll be instructing the model to adhere to Langfun and PyGlove.\n", "\n", " Parameters:\n", " user_input (str): The input from the user to be formatted.\n", "\n", " Returns:\n", " str: The formatted prompt with Gemma control tokens.\n", " \"\"\"\n", " return f\"\"\"\\\n", " You are an AI assistant designed to work seamlessly with Langfun and PyGlove. Your task is to parse user inputs into complex object representations compatible with these libraries. Always output as an object and nothing else. Do not include any additional text, explanations, or comments.\n", "\n", " Guidelines:\n", " Output Format: Always provide your output as an object representation suitable for Langfun and PyGlove.\n", " Consistency: Do not deviate from producing the object. Ensure your output is valid and can be used directly in code.\n", " No Extra Text: Exclude any additional explanations or comments outside of the object.\n", "\n", " Technical Details:\n", " Your goal is to help parse user inputs into object representations that can be directly utilized within Langfun and PyGlove code.\n", "\n", " <start_of_turn>user\n", " {user_input}\n", " <end_of_turn>\n", " <start_of_turn>model\n", " \"\"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "using-langfun" }, "source": [ "### Using Langfun with Gemma 2\n", "\n", "Langfun allows you to define a clear schema and effortlessly convert natural language inputs into structured objects. These objects can then be plugged directly into external systems, databases, or APIs, facilitating seamless data flow and integration without the need for manual parsing or data cleaning.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "VtVhMn_hNlCB" }, "source": [ "First, try a simple use case: ask Langfun to represent a mathematical statement's final answer using a Python class `Answer`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wQK5MrmxNjOF" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Answer(\n", " result = 3\n", ")\n" ] } ], "source": [ "class Answer(pg.Object):\n", " result: int\n", "\n", "r = lf.query(\n", " prompt=format_gemma_prompt('The result of one plus two is three'),\n", " schema=Answer,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(r)" ] }, { "cell_type": "markdown", "metadata": { "id": "thxM1dtFefLP" }, "source": [ "Next, you'll take a look at real-world use cases demonstrating how you can leverage Langfun and Gemma 2 to build robust applications." ] }, { "cell_type": "markdown", "metadata": { "id": "Rkh5Jyz5BlBk" }, "source": [ "### Parsing a Job Description into a Structured Object\n", "\n", "Automating the recruitment process can significantly enhance efficiency and accuracy in HR departments. By parsing job descriptions into structured objects, you can seamlessly integrate this data into your HR systems, enabling functionalities like resume matching, job postings, and analytics.\n", "\n", "Let's start by defining a `Job` schema using PyGlove, then provide a job description as input. 
Langfun, powered by the Gemma 2 model, processes the text and extracts relevant information, populating the `Job` object accordingly.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "parse-job-description" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Job(\n", " title = 'Software Engineer',\n", " company = 'TechCorp',\n", " location = 'San Francisco',\n", " responsibilities = [\n", " 0 : 'developing software solutions',\n", " 1 : 'collaborating with cross-functional teams',\n", " 2 : 'participating in code reviews'\n", " ],\n", " requirements = [\n", " 0 : \"Bachelor's degree in Computer Science\",\n", " 1 : '3+ years of experience in software development',\n", " 2 : 'proficiency in Python and JavaScript'\n", " ]\n", ")\n" ] } ], "source": [ "# Define the Job schema\n", "class Job(pg.Object):\n", " title: str\n", " company: str\n", " location: str\n", " responsibilities: list[str]\n", " requirements: list[str]\n", "\n", "# Job description text\n", "job_description = \"\"\"\n", "We are looking for a Software Engineer to join our team at TechCorp in San Francisco.\n", "Responsibilities include developing software solutions, collaborating with cross-functional teams, and participating in code reviews.\n", "Requirements: Bachelor's degree in Computer Science, 3+ years of experience in software development, proficiency in Python and JavaScript.\n", "\"\"\"\n", "\n", "# Parse the job description\n", "result = lf.query(\n", " prompt=format_gemma_prompt(job_description),\n", " schema=Job,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "c4TQL0qNGedG" }, "source": [ "This structured data can then be used directly in your HR systems or external APIs for various purposes such as job matching and integration with Applicant Tracking Systems (ATS) for streamlined recruitment workflows." ] }, { "cell_type": "markdown", "metadata": { "id": "job-description-explanation" }, "source": [ "### Enabling Automated Handling of Support Tickets\n", "\n", "In this second example, you'll create a support ticket by incorporating variables into your prompt. In customer service applications, generating support tickets efficiently is crucial for timely issue resolution. By creating structured support tickets from user inputs, you can integrate seamlessly with ticketing systems, prioritize issues, and assign them to the appropriate teams.\n", "\n", "Define a `SupportTicket` schema and use variables like `customer_name` and `issue` in your prompt. Langfun processes these variables within the context, generating a structured `SupportTicket` object that can be integrated into your customer service systems or external APIs. This lets you automatically create tickets in a system, easily track ticket statuses and generate reports." 
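] }, { "cell_type": "markdown", "metadata": { "id": "template-note" }, "source": [ "The `{{customer_name}}` and `{{issue}}` placeholders you'll use below are Jinja-style template variables that `lf.query` substitutes before calling the model. To see the substitution in isolation, here is a minimal sketch (assuming Langfun's `lf.Template` API, with illustrative values) that renders a prompt without any model call." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "template-demo" }, "outputs": [], "source": [ "# Render a prompt template with variables; no model call happens here.\n", "template = lf.Template(\n", " 'Create a support ticket for {{customer_name}} who reported: \"{{issue}}\"',\n", " customer_name='Jane Roe',\n", " issue='The payment page times out.',\n", ")\n", "print(template.render().text)"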
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fO19wo73Raje" }, "outputs": [], "source": [ "customer_name = 'John Doe'\n", "issue = 'I am unable to log into my account.'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "create-support-ticket" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SupportTicket(\n", " customer_name = 'John Doe',\n", " issue = 'I am unable to log into my account.',\n", " steps_taken = [\n", " 0 : 'Verify username and password accuracy.',\n", " 1 : 'Attempt password reset.',\n", " 2 : 'Check for browser cache or cookie issues.'\n", " ],\n", " resolution = None\n", ")\n" ] } ], "source": [ "# Define the SupportTicket schema\n", "class SupportTicket(pg.Object):\n", " customer_name: str\n", " issue: str\n", " steps_taken: list[str]\n", " resolution: str | None\n", "\n", "# Prompt with variables\n", "prompt = 'Create a support ticket for {{customer_name}} who reported: \"{{issue}}\" and propose steps to be taken along with a final resolution'\n", "\n", "# Generate the support ticket\n", "result = lf.query(\n", " prompt=format_gemma_prompt(prompt),\n", " customer_name=customer_name,\n", " issue=issue,\n", " schema=SupportTicket,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "iy3mKTOzHbpK" }, "source": [ "When your customer service team receives numerous inquiries daily, automating the creation of support tickets ensures that each issue is documented consistently and efficiently. The structured `SupportTicket` objects can be fed directly into your CRM or ticketing software, facilitating faster response times and better customer satisfaction.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "support-ticket-explanation" }, "source": [ "### Parsing Customer Feedback into Structured Data\n", "\n", "Understanding customer feedback is essential for improving products and services. You'll start by defining a `Feedback` schema and providing customer feedback text. Langfun processes the feedback, determining sentiment and identifying key topics, and outputs a structured `Feedback` object. This allows you to parse customer feedback into structured data, aiding in sentiment analysis, report generation, and topic modeling."
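] }, { "cell_type": "markdown", "metadata": { "id": "feedback-aggregation-note" }, "source": [ "Once feedback is parsed into structured objects, downstream analysis becomes ordinary data manipulation. The next cell is a small, self-contained sketch of the kind of aggregation this enables; the tuples are illustrative stand-ins for the `sentiment` and `topics` fields of parsed `Feedback` objects. The actual parsing example follows." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "feedback-aggregation" }, "outputs": [], "source": [ "from collections import Counter\n", "\n", "# Illustrative stand-ins for (sentiment, topics) fields of parsed Feedback objects.\n", "parsed_feedback = [\n", " ('mixed', ['new features', 'app crashes']),\n", " ('positive', ['new features']),\n", " ('negative', ['app crashes', 'photo uploads']),\n", "]\n", "\n", "# Tally sentiments and topics across all parsed feedback items.\n", "sentiment_counts = Counter(sentiment for sentiment, _ in parsed_feedback)\n", "topic_counts = Counter(topic for _, topics in parsed_feedback for topic in topics)\n", "print(sentiment_counts.most_common())\n", "print(topic_counts.most_common())"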
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "parse-feedback" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Feedback(\n", " customer_id = 1,\n", " feedback_text = 'I love the new features in the app, but it crashes frequently when I try to upload photos.',\n", " sentiment = 'mixed',\n", " topics = [\n", " 0 : 'new features',\n", " 1 : 'app crashes',\n", " 2 : 'photo uploads'\n", " ]\n", ")\n" ] } ], "source": [ "from typing import Literal\n", "\n", "# Define the Feedback schema\n", "class Feedback(pg.Object):\n", " customer_id: int\n", " feedback_text: str\n", " sentiment: Literal['positive', 'negative', 'mixed']\n", " topics: list[str]\n", "\n", "# Customer feedback text\n", "feedback_text = 'I love the new features in the app, but it crashes frequently when I try to upload photos.'\n", "\n", "# Context for the model\n", "context = f'Parse the following customer (id: 1) feedback into an object representation (Feedback): {feedback_text}'\n", "\n", "# Parse the feedback\n", "result = lf.query(\n", " prompt=format_gemma_prompt(context),\n", " schema=Feedback,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "fM2w-_agEVIL" }, "source": [ "This example shows how the model can extract valuable insights from customer feedback, which is crucial for product improvement. Imagine you manage a mobile app with a large user base. By systematically parsing feedback, you can identify which features are well received and which areas require attention. For instance, detecting frequent crashes related to photo uploads allows your development team to prioritize bug fixes, enhancing the overall user experience.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "feedback-explanation" }, "source": [ "### Classifying Support Tickets Using `Literal` Types\n", "\n", "Efficiently categorizing and prioritizing support tickets can significantly enhance your ticket triaging system. Proper classification ensures that issues are addressed by the right teams promptly, improving resolution times and customer satisfaction.\n", "\n", "Here you'll perform a classification task using `Literal` types from the `typing` module. Define a `Ticket` schema whose `priority` and `category` fields are constrained to specific values, then provide a support ticket description; Langfun classifies the ticket accordingly." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "classify-ticket" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ticket(\n", " id = 12345,\n", " issue = 'Customer reports that they are unable to update their billing information on the website.',\n", " priority = 'medium',\n", " category = 'billing',\n", " assigned_to = None\n", ")\n" ] } ], "source": [ "# Define the Ticket schema\n", "class Ticket(pg.Object):\n", " id: int\n", " issue: str\n", " priority: Literal['low', 'medium', 'high']\n", " category: Literal['billing', 'technical', 'account', 'other']\n", " assigned_to: str | None\n", "\n", "# Support ticket description\n", "ticket_description = 'Ticket ID: 12345. 
Customer reports that they are unable to update their billing information on the website.'\n", "\n", "# Context for classification\n", "context = f'Classify the following support ticket into the Ticket object: {ticket_description}'\n", "\n", "# Classify the ticket\n", "result = lf.query(\n", " prompt=format_gemma_prompt(context),\n", " schema=Ticket,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "0L4XZrBcEjM-" }, "source": [ "In a customer service center, tickets often vary widely in nature and urgency. By automatically classifying tickets, you ensure that billing issues are handled by a finance team, technical problems by the IT department, and so forth. Prioritizing tickets based on their severity helps in managing workloads effectively and ensuring that high-priority issues are resolved swiftly. This demonstrates the model's ability to categorize and prioritize support tickets automatically.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "workout-plan-explanation" }, "source": [ "### Chain-of-Thought Reasoning\n", "\n", "Chain-of-thought (CoT) reasoning involves breaking down complex problems into a series of logical steps. This approach enhances the model's ability to solve intricate tasks by systematically reasoning through each component.\n", "\n", "In this example, you'll see the model perform CoT reasoning on a simple arithmetic puzzle.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sYiTwC8S_ypU" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Solution(\n", " question = \"How much in dollars does she make every day at the farmers' market?\",\n", " steps = [\n", " 0 : Step(\n", " description = 'Eggs laid daily',\n", " step_output = 16.0\n", " ),\n", " 1 : Step(\n", " description = 'Eggs eaten for breakfast',\n", " step_output = -3.0\n", " ),\n", " 2 : Step(\n", " description = 'Eggs used for baking',\n", " step_output = -4.0\n", " ),\n", " 3 : Step(\n", " description = 'Total eggs remaining',\n", " step_output = 9.0\n", " ),\n", " 4 : Step(\n", " description = 'Earnings from selling eggs',\n", " step_output = 18.0\n", " )\n", " ],\n", " final_answer = 18\n", ")\n" ] } ], "source": [ "class Step(pg.Object):\n", " description: str\n", " step_output: float\n", "\n", "class Solution(pg.Object):\n", " question: str\n", " steps: list[Step]\n", " final_answer: int\n", "\n", "\n", "# Puzzle question\n", "question = (\n", " 'Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. '\n", " 'She sells the remainder at the farmers\' market daily for $2 per fresh duck egg. '\n", " 'How much in dollars does she make every day at the farmers\' market? 
'\n", " 'Solve the puzzle and use Solution to represent it'\n", ")\n", "\n", "# Query the local Gemma 2 server and parse the response into a Solution\n", "result = lf.query(\n", " prompt=format_gemma_prompt(question),\n", " schema=Solution,\n", " lm=LlamaCppPythonRemote(\"http://0.0.0.0:8000/v1\")\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "metadata": { "id": "financial-calculations-explanation" }, "source": [ "The model breaks the problem into intermediate steps (16 eggs laid, minus 3 eaten and 4 baked, leaving 9 to sell at $2 each) and arrives at the correct final answer of $18, demonstrating chain-of-thought reasoning.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "A90firK-Aa0W" }, "source": [ "In this notebook, you've explored how to leverage **Langfun**, **PyGlove**, **llama.cpp** and the **Gemma 2** model from **Hugging Face** to address a variety of real-world scenarios. Through practical use cases (parsing job descriptions, creating support tickets, analyzing customer feedback, classifying support tickets, and performing chain-of-thought reasoning), you've seen how Langfun converts unstructured text into structured, actionable objects, enhancing the capabilities of your applications and integrating seamlessly with external systems.\n", "\n", "Feel free to extend these examples to fit your specific needs and explore the full potential of Langfun and Gemma in your projects!\n" ] } ], "metadata": { "accelerator": "GPU", "colab": { "name": "[Gemma_2]Using_with_Langfun_and_LlamaCpp_Python_Bindings.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }